翻訳と辞書
Words near each other
・ Existentia
・ Existential clause
・ Existential counselling
・ Existential crisis
・ Existential fallacy
・ Existential generalization
・ Existential graph
・ Existential humanism
・ Existential instantiation
・ Existential migration
・ Existential nihilism
・ Existential phenomenology
・ Existential Psychotherapy
・ Existential Psychotherapy (book)
・ Existential quantification
・ Existential risk from artificial general intelligence
・ Existential theory of the reals
・ Existential therapy
・ Existentialism
・ Existentialism and Humanism
・ Existentialist anarchism
・ Existentially closed model
・ Existentiell
・ Existenz
・ Existenz (disambiguation)
・ Existenz (journal)
・ Exister
・ Existere
・ Existing visitor optimisation
・ Existir



Existential risk from artificial general intelligence : English Wikipedia edition
Existential risk from artificial general intelligence
Existential risk from advanced artificial intelligence is the risk that progress in artificial intelligence (AI) could result in an unrecoverable global catastrophe, such as human extinction. The severity of different AI risk scenarios is widely debated, and rests on a number of unresolved questions about future progress in computer science.
Stuart Russell and Peter Norvig's ''Artificial Intelligence: A Modern Approach'', the standard undergraduate AI textbook, cites the possibility that an AI system's learning function "may cause it to evolve into a system with unintended behavior" as the most serious existential risk from AI technology.
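A misspecified learning objective is one concrete way such unintended behavior can arise. The toy sketch below is purely illustrative and is not drawn from the textbook; the names proxy_reward, intended_reward and simulate are hypothetical. It shows a naive optimizer that scores ever higher on the proxy objective it is given while performing poorly on the objective its designer actually intended.
<syntaxhighlight lang="python">
import random

# Toy illustration of a misspecified objective: the designer wants a cleaning
# robot to clean tiles without breaking vases, but the proxy reward being
# optimized only counts tiles cleaned (hypothetical example).

def intended_reward(tiles_cleaned, vases_broken):
    # What the designer actually cares about: cleaning minus damage.
    return tiles_cleaned - 10 * vases_broken

def proxy_reward(tiles_cleaned, vases_broken):
    # What the learning system actually optimizes: cleaning only.
    return tiles_cleaned

def simulate(speed):
    # Moving faster cleans more tiles but also knocks over more vases.
    tiles_cleaned = 5 * speed
    vases_broken = max(0, speed - 3)   # damage appears only at high speed
    return tiles_cleaned, vases_broken

best_speed, best_proxy = 1, float("-inf")
for _ in range(200):                   # naive random search over speeds
    candidate = random.randint(1, 10)
    proxy = proxy_reward(*simulate(candidate))
    if proxy > best_proxy:             # climbs the proxy objective only
        best_speed, best_proxy = candidate, proxy

tiles, vases = simulate(best_speed)
print("chosen speed:   ", best_speed)
print("proxy reward:   ", proxy_reward(tiles, vases))
print("intended reward:", intended_reward(tiles, vases))
# The search settles on the highest speed: the proxy reward is maximal,
# but the intended reward is poor because of side effects the proxy ignores.
</syntaxhighlight>
The sketch captures only the narrow point that an optimizer evaluated on a proxy has no incentive to respect constraints the proxy omits; it says nothing about how likely such failures are in practice.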
Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence called for expanded research aimed at ensuring that increasingly capable AI systems remain robust and beneficial.
The letter was signed by a number of leading AI researchers in academia and industry, including Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Vicarious and Google DeepMind.〔(Open Letter on Artificial Intelligence ), futureoflife.org, http://futureoflife.org/misc/open_letter〕
==Risk scenarios==
In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and to what extent such abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They concluded that self-awareness as depicted in science fiction is unlikely, but that there were other potential hazards and pitfalls.〔(Scientists Worry Machines May Outsmart Man ), by John Markoff, The New York Times, July 26, 2009.〕
Various media sources and scientific groups have noted substantial recent gains in AI functionality and autonomy.〔(Gaming the Robot Revolution: A military technology expert weighs in on Terminator: Salvation ), by P. W. Singer, slate.com, May 21, 2009.〕〔(Robot takeover ), gyre.org.〕〔(robot page ), engadget.com.〕 Citing work by Nick Bostrom, entrepreneurs Bill Gates and Elon Musk have expressed concern that AI could eventually advance to the point where humans could no longer control it. AI specialist Stuart Russell summarizes the concern as arising not from machine consciousness but from the ability of highly competent systems to make high-quality decisions in pursuit of objectives that may be imperfectly aligned with human values.
Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a ''Communications of the ACM'' editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.
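One minimal reading of "soliciting human input as needed", sketched here under assumptions of this summary rather than anything Dietterich and Horvitz specify (the confidence threshold and the classify stub are hypothetical), is to gate autonomous action behind a confidence check and escalate to a human operator whenever the system is unsure.
<syntaxhighlight lang="python">
# Minimal human-in-the-loop gating sketch: act autonomously only when
# confidence is high, otherwise defer to a human operator.
CONFIDENCE_THRESHOLD = 0.9             # illustrative value, not a recommendation

def classify(request):
    # Stand-in for a learned model: returns (proposed_action, confidence).
    if "shutdown" in request:
        return "refuse", 0.55          # ambiguous case, low confidence
    return "approve", 0.97

def ask_human(request, proposed_action):
    # A real system would route this to an operator queue; here we just log it.
    print(f"ESCALATE: '{request}' (model suggested '{proposed_action}')")
    return "needs_human_review"

def handle(request):
    action, confidence = classify(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action                  # act without human input
    return ask_human(request, action)  # solicit human input as needed

for req in ["routine maintenance ticket", "shutdown safety interlock"]:
    print(req, "->", handle(req))
</syntaxhighlight>
The design point illustrated is only that deference is triggered by the system's own uncertainty rather than left to chance; the editorial itself argues for systems that can do this fluidly and unambiguously.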

Excerpt source: the free encyclopedia Wikipedia.
Read the full text of "Existential risk from artificial general intelligence" on Wikipedia.


